Sparse Representations of Positive Functions via First- and Second-Order Pseudo-Mirror Descent
Authors
Abstract
We consider expected risk minimization problems in which the range of the estimator is required to be nonnegative, motivated by settings such as maximum likelihood estimation (MLE) and trajectory optimization. To facilitate nonlinear interpolation, we hypothesize that the search space is a Reproducing Kernel Hilbert Space (RKHS). We develop first- and second-order variants of stochastic mirror descent employing (i) \emph{pseudo-gradients} and (ii) complexity-reducing projections. Compressive projection in the first-order scheme is executed via kernel orthogonal matching pursuit (KOMP), which overcomes the fact that the vanilla RKHS parameterization grows unbounded with the iteration index in the stochastic setting. Moreover, pseudo-gradients are needed when gradient estimates for the cost are only computable up to some numerical error, which arises in, e.g., integral approximations. Under a constant step-size and compression budget, we establish tradeoffs between the radius of convergence of the expected sub-optimality and the compression budget parameter, as well as non-asymptotic bounds on the model complexity. To refine the solution's precision, an extension employs recursively averaged pseudo-gradient outer-products to approximate the Hessian inverse, whose convergence in mean is established under an additional eigenvalue decay condition on the Hessian of the optimal element, which is unique to this work. Experiments demonstrate favorable performance on inhomogeneous Poisson process intensity estimation in practice.
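To make the first-order scheme concrete, below is a minimal sketch (not the authors' implementation) of pseudo-mirror-descent steps for Poisson intensity estimation in an RKHS. The Gaussian kernel, the exponential link $\lambda(x) = \exp(f(x))$ standing in for the inverse mirror map, the Monte Carlo approximation of the integral term (the source of pseudo-gradient error), and the norm-threshold pruning rule used in place of full KOMP are all illustrative assumptions, not choices fixed by the abstract.

```python
# Illustrative sketch: first-order pseudo-mirror descent for inhomogeneous
# Poisson intensity estimation with an RKHS "dual" iterate f.
# Hypothetical choices throughout: Gaussian kernel, link lambda = exp(f),
# Monte Carlo integral approximation, and norm-threshold pruning as a
# cheap stand-in for kernel orthogonal matching pursuit (KOMP).
import numpy as np

def gauss_kernel(x, y, bw=0.2):
    """Gaussian kernel on the unit interval."""
    return np.exp(-0.5 * ((x - y) / bw) ** 2)

class PseudoMirrorDescent:
    def __init__(self, step=0.1, budget=1e-3, n_mc=50, rng=None):
        self.centers = np.empty(0)  # kernel dictionary (event / MC locations)
        self.weights = np.empty(0)  # expansion coefficients of the iterate f
        self.step, self.budget, self.n_mc = step, budget, n_mc
        self.rng = rng or np.random.default_rng(0)

    def f(self, x):
        """Dual iterate f(x) = sum_j w_j k(c_j, x)."""
        if self.centers.size == 0:
            return np.zeros_like(np.atleast_1d(x), dtype=float)
        return gauss_kernel(self.centers[:, None],
                            np.atleast_1d(x)[None, :]).T @ self.weights

    def intensity(self, x):
        """Positive primal estimate via the (assumed) inverse mirror map."""
        return np.exp(self.f(x))

    def update(self, event):
        """One stochastic pseudo-gradient step on the Poisson negative
        log-likelihood: integral of exp(f) over [0,1] minus sum of f at
        events. The integral term is Monte Carlo approximated, so the
        gradient is only computed up to numerical error (a pseudo-gradient)."""
        # Data term: the event pushes the expansion up at its location.
        self.centers = np.append(self.centers, event)
        self.weights = np.append(self.weights, self.step)
        # Integral term: MC samples push the expansion down where exp(f) is large.
        u = self.rng.uniform(0.0, 1.0, self.n_mc)
        self.centers = np.append(self.centers, u)
        self.weights = np.append(self.weights,
                                 -self.step * self.intensity(u) / self.n_mc)
        self._compress()

    def _compress(self):
        """Crude stand-in for KOMP: drop atoms whose Hilbert-norm contribution
        |w_j| * sqrt(k(c_j, c_j)) falls below the compression budget."""
        keep = (np.abs(self.weights)
                * np.sqrt(gauss_kernel(self.centers, self.centers))) > self.budget
        self.centers, self.weights = self.centers[keep], self.weights[keep]

# Usage: stream events concentrated near x = 0.5, then inspect model order.
pmd = PseudoMirrorDescent()
for event in np.clip(np.random.default_rng(1).normal(0.5, 0.1, 200), 0, 1):
    pmd.update(event)
print(len(pmd.centers), pmd.intensity(np.array([0.25, 0.5, 0.75])))
```

The compression step is the structural point: each stochastic update appends new kernel atoms, so without a budgeted projection the model order grows linearly with the iteration index, which is exactly the pathology KOMP is introduced to control in the paper's first-order scheme.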
Similar sources
The Representations and Positive Type Functions of Some Homogenous Spaces
For a homogeneous space $G/H$, we show that the convolution on $L^1(G/H)$ is the same as the convolution on $L^1(K)$, where $G$ is the semidirect product of a closed subgroup $H$ and a normal subgroup $K$ of $G$. We also prove that there exists a one-to-one correspondence between nondegenerate $\ast$-representations of $L^1(G/H)$ and representations of ...
First- and Second-Order Necessary Conditions Via Exact Penalty Functions
In this paper we study first- and second-order necessary conditions for nonlinear programming problems from the viewpoint of exact penalty functions. By applying the variational description of regular subgradients, we first establish necessary and sufficient conditions for a penalty term to be of KKT-type by using the regular subdifferential of the penalty term. In terms of the kernel of the subd...
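As a concrete instance of the penalty framework described above, a classical exact penalty for the constrained problem $\min f(x)$ subject to $g_i(x) \le 0$ is the $\ell_1$ penalty (a standard textbook form, not necessarily the one studied in this paper):

$$P_\rho(x) \;=\; f(x) \;+\; \rho \sum_i \max\{0,\, g_i(x)\},$$

which is exact in the sense that, for all sufficiently large $\rho$, local minimizers of the constrained problem are local minimizers of $P_\rho$; KKT-type conditions of the kind mentioned above characterize when minimizing such a penalty recovers the classical multiplier conditions.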
Sparse Q-learning with Mirror Descent
This paper explores a new framework for reinforcement learning based on online convex optimization, in particular mirror descent and related algorithms. Mirror descent can be viewed as an enhanced gradient method, particularly suited to minimization of convex functions in high-dimensional spaces. Unlike traditional gradient methods, mirror descent undertakes gradient updates of weights in both t...
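For reference, the generic mirror descent update the snippet alludes to can be written as (standard form, with mirror map $\psi$, Bregman divergence $D_\psi$, and step size $\eta$; the notation is ours, not this paper's):

$$x_{t+1} \;=\; \arg\min_{x \in \mathcal{X}} \Big\{ \eta \,\langle \nabla f(x_t),\, x \rangle + D_\psi(x, x_t) \Big\}, \qquad D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla \psi(y),\, x - y \rangle.$$

In the unconstrained case this is equivalent to the dual-space step $\nabla \psi(x_{t+1}) = \nabla \psi(x_t) - \eta \nabla f(x_t)$, so the update acts on both primal and dual representations of the iterate; with the negative-entropy mirror map it reduces to multiplicative (exponentiated-gradient) updates, which is what makes it attractive for sparse, high-dimensional problems.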
Robust Blind Deconvolution via Mirror Descent
We revisit the Blind Deconvolution problem with a focus on understanding its robustness and convergence properties. Provable robustness to noise and other perturbations has received recent interest in vision, from obtaining immunity to adversarial attacks to assessing and describing failure modes of algorithms in mission-critical applications. Further, many blind deconvolution methods based on ...
Provable Bayesian Inference via Particle Mirror Descent
Since the prox-mapping of stochastic mirror descent is intractable when applied directly to the optimization problem (1), we propose the $\epsilon$-inexact prox-mapping within the stochastic mirror descent framework in Section 3. Instead of solving the prox-mapping exactly, we approximate the solution with error. In this section, we will show that as long as the approximation error is tolerable, the stoc...
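Concretely, with the prox-mapping $P_{x_t}(g) = \arg\min_{x} \{ \langle g, x \rangle + D_\psi(x, x_t) \}$, one common way to formalize the $\epsilon$-inexact version (our phrasing; the paper's definition may differ in detail) is to accept any point whose prox objective is within $\epsilon$ of optimal:

$$\tilde{x}_{t+1} \in P^{\epsilon}_{x_t}(g) := \Big\{ x \;:\; \langle g, x \rangle + D_\psi(x, x_t) \,\le\, \min_{y} \big\{ \langle g, y \rangle + D_\psi(y, x_t) \big\} + \epsilon \Big\},$$

so the convergence analysis only requires the per-iteration errors $\epsilon_t$ to diminish at a suitable rate rather than vanish exactly.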
Journal
Journal title: IEEE Transactions on Signal Processing
Year: 2022
ISSN: 1053-587X, 1941-0476
DOI: https://doi.org/10.1109/tsp.2022.3173146